Photorealistic frontal view synthesis from a single face image has a wide range of applications in the field of face recognition. Although data-driven deep learning methods have been proposed to address this problem by seeking solutions from ample face data, it remains challenging because it is intrinsically ill-posed. This paper proposes a Two-Pathway Generative Adversarial Network (TP-GAN) for photorealistic frontal view synthesis by simultaneously perceiving global structures and local details. Four landmark-located patch networks are proposed to attend to local textures in addition to the commonly used global encoder-decoder network. Beyond the novel architecture, we make this ill-posed problem well constrained by introducing a combination of adversarial loss, symmetry loss, and identity-preserving loss. The combined loss function leverages both the frontal face distribution and pre-trained discriminative deep face models to guide an identity-preserving inference of frontal views from profiles. Different from previous deep learning methods that mainly rely on intermediate features for recognition, our method directly leverages the synthesized identity-preserving image for downstream tasks such as face recognition and attribute estimation. Experimental results demonstrate that our method not only produces compelling perceptual results but also outperforms state-of-the-art methods on large-pose face recognition.
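The combined loss described above can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the weighting coefficients and the exact form of each term (L1 symmetry, feature-distance identity term, non-saturating adversarial term) are assumptions for exposition only.

```python
import numpy as np

def symmetry_loss(img):
    # L1 difference between the image and its horizontal mirror;
    # encourages the left-right symmetry expected of frontal faces
    return np.abs(img - img[:, ::-1]).mean()

def identity_loss(feat_synth, feat_gt):
    # Distance between deep-feature embeddings of the synthesized
    # and ground-truth frontal images (in the paper, features come
    # from a pre-trained discriminative face recognition model)
    return np.abs(feat_synth - feat_gt).mean()

def adversarial_loss(d_out):
    # Non-saturating generator loss, -log D(G(x)); d_out is the
    # discriminator's probability that the synthesized image is real
    return -np.log(d_out + 1e-8).mean()

def combined_loss(img, feat_synth, feat_gt, d_out,
                  w_adv=1e-3, w_sym=0.3, w_ip=3e-3):
    # Weighted sum of the three constraints; the weights here are
    # illustrative placeholders, not the paper's tuned values
    return (w_adv * adversarial_loss(d_out)
            + w_sym * symmetry_loss(img)
            + w_ip * identity_loss(feat_synth, feat_gt))
```

In practice each term constrains a different failure mode: the adversarial term pushes outputs toward the frontal face distribution, the symmetry term regularizes occluded regions, and the identity term keeps the synthesized frontal view recognizable as the same person.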